Heatmap-based algorithms currently dominate human pose estimation, and heatmap decoding (i.e. transforming heatmaps into coordinates of human joint points) is a basic step of these algorithms. However, the existing heatmap decoding algorithms neglect the effect of systematic errors. Therefore, an error-compensation-based heatmap decoding algorithm was proposed. Firstly, an error compensation factor of the system was estimated during training. Then, this factor was used in the inference stage to compensate the prediction errors, including both the systematic error and the random error, of human joint points. Extensive experiments were carried out on different network architectures, input resolutions, evaluation metrics and datasets. The results show that, compared with the existing optimal algorithm, the proposed algorithm achieves a significant accuracy gain. Specifically, with the proposed algorithm, the Average Precision (AP) of the HRNet-W48-256×192 model is improved by 2.86 percentage points on the Common Objects in COntext (COCO) dataset, and the Percentage of Correct Keypoints with respect to head (PCKh) of the ResNet-152-256×256 model is improved by 7.8 percentage points on the Max Planck Institute for Informatics (MPII) dataset. Besides, unlike the existing algorithms, the proposed algorithm needs neither Gaussian smoothing preprocessing nor derivation operations, so it runs 2 times faster than the existing optimal algorithm. It can be seen that the proposed algorithm is of practical value for fast and accurate human pose estimation.
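The train-then-compensate idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names (`make_heatmap`, `argmax_decode`, `estimate_compensation`, `compensated_decode`), the plain argmax baseline, and the use of a single mean offset as the compensation factor are all simplifying assumptions made here for illustration.

```python
import numpy as np

def make_heatmap(size, center, sigma=2.0):
    """Render a Gaussian heatmap peaked at a (possibly sub-pixel) joint location."""
    ys, xs = np.mgrid[0:size[0], 0:size[1]]
    return np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2 * sigma ** 2))

def argmax_decode(hm):
    """Baseline decoding: integer (x, y) coordinates of the heatmap maximum."""
    y, x = np.unravel_index(np.argmax(hm), hm.shape)
    return np.array([x, y], dtype=float)

def estimate_compensation(train_pairs):
    """'Training' stage: average offset between ground-truth coordinates and the
    raw argmax decoding, taken as the systematic-error compensation factor."""
    offsets = [gt - argmax_decode(hm) for hm, gt in train_pairs]
    return np.mean(offsets, axis=0)

def compensated_decode(hm, comp):
    """Inference stage: add the learned compensation factor to the raw decoding.
    No Gaussian smoothing or derivative of the heatmap is needed."""
    return argmax_decode(hm) + comp
```

Because the compensation factor is a constant learned once, inference stays a single argmax plus an addition, which is consistent with the speed advantage the abstract reports over decoding schemes that smooth the heatmap and take derivatives.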
Aiming at the problems of low detection efficiency and accuracy in the health management of industrial robot axes, a new Health Index (HI) construction method based on action-cycle degradation similarity measurement was proposed under the background of mechanical-axis operation monitoring big data, and robot Remaining Useful Life (RUL) prediction was carried out by combining it with a Long Short-Term Memory (LSTM) network. Firstly, MPdist was used to focus on the similarity features of sub-cycle sequences between different action cycles of the mechanical axis, and the deviation distance between normal cycle data and degradation cycle data was calculated, so that the HI was constructed. Then, the LSTM network model was trained on the HI set, and the mapping relationship between HI and RUL was established. Finally, the MPdist-LSTM hybrid model was used to calculate the RUL automatically and give early warning in time. Experiments were carried out on a company's six-axis industrial robot, and about 15 million pieces of data were collected. The monotonicity, robustness and trend of the HI, as well as the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), R-squared (R²), Error Range (ER), Early Prediction (EP) and Late Prediction (LP) of the RUL prediction, were tested. The proposed method was compared with methods such as Dynamic Time Warping (DTW), Euclidean Distance (ED) and Time Domain Eigenvalue (TDE) combined with LSTM, as well as MPdist combined with a Recurrent Neural Network (RNN). The experimental results show that, compared with the other methods, the proposed method achieves HI monotonicity and trend higher by at least 0.07 and 0.13 respectively, higher RUL prediction accuracy, and a smaller ER, which verifies its effectiveness.
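The HI construction step can be sketched as follows. The snippet below is a brute-force, simplified MPdist (the k-th smallest value of the cross matrix-profile, with k taken as 5% of the combined series length, as in the MPdist literature); production code would use an optimized implementation such as `stumpy.mpdist`. The function names, the subsequence length `m=16`, and the idea of measuring each monitored cycle against one healthy reference cycle are illustrative assumptions, and the LSTM mapping from HI to RUL is omitted.

```python
import numpy as np

def _znorm(s):
    """Z-normalize a subsequence (guarding against near-constant windows)."""
    sd = s.std()
    return (s - s.mean()) / sd if sd > 1e-12 else s - s.mean()

def _min_dists(a, b, m):
    """For each length-m subsequence of a, the smallest z-normalized
    Euclidean distance to any length-m subsequence of b."""
    subs_b = [_znorm(b[j:j + m]) for j in range(len(b) - m + 1)]
    return [min(np.linalg.norm(_znorm(a[i:i + m]) - sb) for sb in subs_b)
            for i in range(len(a) - m + 1)]

def mpdist(a, b, m):
    """Simplified brute-force MPdist: the k-th smallest of the concatenated
    cross matrix-profile values, with k = 5% of the combined length."""
    p = sorted(_min_dists(a, b, m) + _min_dists(b, a, m))
    k = int(np.ceil(0.05 * (len(a) + len(b))))
    return p[min(k, len(p)) - 1]

def health_index(normal_cycle, cycle, m=16):
    """HI sketch: deviation distance of a monitored action cycle from the
    normal (healthy) reference cycle; larger values indicate more degradation."""
    return mpdist(normal_cycle, cycle, m)
```

Because MPdist compares sub-cycle shapes rather than aligned samples, small phase shifts between action cycles barely affect the HI, while genuine waveform degradation raises it, which is the property the abstract exploits.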
Since it is necessary to evaluate and analyze the service performance of a cloud computing center to guarantee Quality of Service (QoS) and avoid violations of the Service Level Agreement (SLA), an approximate analysis model based on M/M/n/n+r queueing theory was proposed for the cloud computing center. By solving this model, the probability distribution function of the response time and other QoS indicators were acquired; meanwhile, the relationships among the number of servers, the size of the queue buffer, the response time, the blocking probability and the immediate service probability were revealed and verified by simulation. The experimental results indicate that improving the server service rate is better than increasing the number of servers for improving service performance.
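The steady-state quantities of such an M/M/n/n+r queue follow from standard birth-death analysis, sketched below. The function name and return structure are assumptions of this illustration; the formulas themselves are the textbook ones: p_k ∝ (λ/μ)^k / k! for k ≤ n, geometric decay with ratio λ/(nμ) in the buffer, blocking probability p_{n+r}, and mean response time via Little's law on the accepted arrival rate.

```python
import math

def mmn_nr_metrics(lam, mu, n, r):
    """Steady-state metrics of an M/M/n/n+r queue with arrival rate lam,
    per-server service rate mu, n servers and r buffer slots.
    Returns (state probabilities, blocking probability, mean response time)."""
    a = lam / mu                                   # offered load in Erlangs
    # Unnormalized birth-death probabilities p_k for k = 0 .. n + r.
    p = [a ** k / math.factorial(k) for k in range(n + 1)]
    for k in range(n + 1, n + r + 1):
        p.append(p[n] * (a / n) ** (k - n))        # all n servers busy
    z = sum(p)
    p = [x / z for x in p]                         # normalize
    p_block = p[-1]                                # arrivals finding the system full are lost
    l_sys = sum(k * pk for k, pk in enumerate(p))  # mean number in system
    w = l_sys / (lam * (1 - p_block))              # Little's law on accepted jobs
    return p, p_block, w
```

Evaluating this closed form for varied (n, μ, r) is how the trade-off reported in the abstract, i.e. raising the service rate versus adding servers, can be explored without simulation.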